
MEC3079S: Control Systems

Chapter 5 - Prediction of system response


5.1 Introduction

In this chapter we will develop a deeper understanding of the relationship between time-domain and Laplace-domain signals and systems, with the intention of predicting the response of a modelled system. More specifically, if we have a model for the plant, P(s), how can we expect the output response, y(t), to behave for a set of defined input signals, u(t)?
As we introduced in Chapter 1, there are three key considerations in a control system, namely:
  1. Transient response — how quickly the output response settles after the system has been perturbed, and how oscillatory/erratic is the response during this period.
  2. Steady-state response — the final value that the response takes as time tends to infinity (if there is one).
  3. Stability — whether the output response converges to a finite value, or diverges to infinity as time tends to infinity.
We will now begin to explore powerful Laplace-domain techniques that can answer the above questions for any linear, time-invariant system.

5.2 Poles and zeros

5.2.1 Definition

As shown in Chapter 2, any arbitrary transfer function, P(s), can be written as a ratio of polynomials
P(s) = (b_m s^m + ... + b_1 s + b_0)/(s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0),
where the coefficients a_i and b_j are real numbers and n ≥ m (the denominator polynomial has equivalent or higher order than the numerator polynomial). Using factorisation, P(s) can be rewritten as a ratio of irreducible polynomials
P(s) = k (s - z_1)(s - z_2)...(s - z_m) / ( (s - p_1)(s - p_2)...(s - p_n) ),
where k = b_m.
Note that z_j and p_i are complex numbers in general. Importantly, a real number is a special type of complex number, just with the imaginary component set to zero (e.g. 5 = 5 + 0j). We can also think of z_j as a value of s that makes P(s) equal zero, namely
P(z_j) = 0.
Similarly, p_i is a value of s that makes P(s) tend to infinity, namely
P(s) → ∞ as s → p_i.
Importantly, this definition assumes that z_j ≠ p_i for all i and j. In other words, simplification of the factors above is not possible, hence the term ratio of irreducible polynomials.
We will refer to the z_j terms as the zeros of P(s), and the p_i terms as the poles of P(s). As we will see later in this chapter, the poles and zeros of our system actually tell us a lot of important information that we can use to determine the performance and stability of our system. Note that the order (or degree) of the numerator polynomial, m, is equal to the number of zeros of P(s), and the order (or degree) of the denominator polynomial, n, is equal to the number of poles of P(s).

Example 1

Determine the poles and zeros of P(s) = 2/( s(s/5 + 1) ).
The numerator polynomial is 0th-order, so there are no zeros.
We determine the poles of our system by setting the denominator polynomial equal to zero: s(s/5 + 1) = 0. The roots of this equation are p_1 = 0 and p_2 = -5.
%Example 1
s = tf('s');
P = 2/( s*(s/5+1) );
 
[p,z] = pzmap(P) %determine the poles, p, and zeros,z, of P(s)
p = 2×1
0 -5
z = 0×1 empty double column vector

Example 2

Determine the poles and zeros of P(s) = 2(s + 0.5)/(s^2 + 2s + 1).
We find the zeros by setting the numerator polynomial equal to zero: 2(s + 0.5) = 0. The root of this equation is then z_1 = -0.5.
We determine the poles of our system by setting the denominator polynomial equal to zero: s^2 + 2s + 1 = (s + 1)^2 = 0. The roots of this equation are p_1 = -1 and p_2 = -1 (a repeated pole).
%Example 2
s = tf('s');
P = 2*(0.5+s)/( s^2+2*s+1 );
 
[p,z] = pzmap(P) %determine the poles, p, and zeros,z, of P(s)
p = 2×1
-1 -1
z = -0.5000

5.2.2 Relative degree

The relative degree of a transfer function is defined as the difference between the degree of the transfer function's denominator (equivalent to the number of poles of P(s)) and the degree of the numerator (equivalent to the number of zeros of P(s)). Using the notation above, the relative degree is expressed as r = n - m. As will be explained later, we will only consider systems that have a relative degree of at least zero, which implies that n ≥ m.
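As a quick illustration of this bookkeeping, the short Python sketch below (a hypothetical helper, not part of the course's MATLAB material) computes the relative degree from highest-order-first coefficient lists:

```python
def poly_degree(coeffs):
    """Degree of a polynomial given highest-order-first coefficients."""
    # strip leading zero coefficients, e.g. [0, 1, 2] represents s + 2
    i = 0
    while i < len(coeffs) - 1 and coeffs[i] == 0:
        i += 1
    return len(coeffs) - 1 - i

def relative_degree(num, den):
    """Relative degree r = n - m (number of poles minus number of zeros)."""
    return poly_degree(den) - poly_degree(num)

# P(s) = 2(s + 0.5)/(s^2 + 2s + 1): one zero, two poles -> r = 1 (strictly proper)
print(relative_degree([2, 1], [1, 2, 1]))  # 1
```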

Strictly proper transfer function

A transfer function is classed as strictly proper if the degree of the numerator is less than the degree of the denominator. This is the same as requiring that n > m, or that there are more poles in the transfer function than zeros.
For example, P(s) = 1/(s + 1) is a strictly proper transfer function, as the numerator polynomial is of lower degree than that of the denominator polynomial, or equivalently, there are fewer zeros than poles — in this case, no zeros and one pole. More formally, a system is said to be strictly proper if its relative degree is greater than zero.

Biproper transfer function

A transfer function is classed as biproper if the degree of the numerator is equivalent to the degree of the denominator (relative degree equal to zero). This is the same as requiring that n = m, or that there are an equivalent number of poles and zeros in the transfer function.
For example, P(s) = (s + 2)/(s + 1) is a biproper transfer function, as both the numerator and denominator polynomials have the same degree, or equivalently, there is the same number of poles and zeros — in this case, one zero and one pole.

Proper transfer function

A transfer function is classed as proper if the degree of the numerator does not exceed the degree of the denominator. This is the same as requiring that n ≥ m, or that there are at least as many poles in the transfer function as there are zeros. All systems we will consider are proper, which is a requirement for causality. Notably, strictly proper systems fall within a subset of proper systems.
Both 1/(s + 1) and (s + 2)/(s + 1) are examples of proper transfer functions.

5.2.3 Pole-zero cancellation

Pole-zero cancellation occurs when a pole and zero are equivalent, which results in a cancellation of the two factors.

Example 1

Given P(s) = (s + 5)/(s^2 + 10s + 25), determine the corresponding poles and zeros of the system.
Using factorisation, P(s) = (s + 5)/( (s + 5)(s + 5) ) = 1/(s + 5).
Note that the system appeared to have one zero and two poles, but after simplification it turns out that there is only a single pole at p = -5 and no zeros.
s = tf('s');
P = (s+5)/(s^2+10*s+25)
P =
       s + 5
  ---------------
  s^2 + 10 s + 25
Continuous-time transfer function.
 
P = minreal(P) %the minreal function performs pole-zero cancellation if a cancellation exists.
P =
    1
  -----
  s + 5
Continuous-time transfer function.
[p,z] = pzmap(P)
p = -5
z = 0×1 empty double column vector

Example 2

Given P(s) = (s + 1)/(s + 2) and G(s) = (s + 3)/( (s + 1)(s + 5) ), determine the poles and zeros of L(s) = P(s)G(s).
Multiplying the two transfer functions: L(s) = (s + 1)(s + 3)/( (s + 2)(s + 1)(s + 5) ) = (s + 3)/( (s + 2)(s + 5) ).
Therefore, as a result of pole-zero cancellation, L(s) has a zero at z = -3 and poles at p_1 = -2 and p_2 = -5.
s = tf('s');
P = (s+1)/(s+2);
G = (s+3)/(s+1)/(s+5);
L = P*G
L =
       s^2 + 4 s + 3
  -----------------------
  s^3 + 8 s^2 + 17 s + 10
Continuous-time transfer function.
L = minreal(L)
L =
      s + 3
  --------------
  s^2 + 7 s + 10
Continuous-time transfer function.
[p,z] = pzmap(L)
p = 2×1
-5.0000 -2.0000
z = -3

5.2.4 The s-plane

The pole and zero locations in the complex plane infer a variety of characteristics of the system, such as the rate of exponential growth/decay, the frequency of oscillation, and stability (more on this later). We can explicitly define the i-th pole of P(s) as
p_i = σ_i + jω_i,
where σ_i is the real component of pole i and ω_i is the corresponding imaginary component. We can similarly define the k-th zero of P(s) as
z_k = σ_k + jω_k.
Each pole and zero of P(s) can be visualised on the complex plane, hereafter referred to as the s-plane, where poles are indicated with a × symbol, and zeros are shown with a ○ symbol, as seen in Figure 5.1.
Figure 5.1: Example of a pole and zero in the s-plane.
Recall that the x-axis of the complex plane indicates the real component of a complex number, and the y-axis indicates the imaginary component.
s = tf('s');
 
P = "(a+s)/(b+s)";
a = 1;
b = 4;
 
figure,clf,hold on
pzmap( eval(P) )
 
[p,z] = pzmap( eval(P) )
p = -4
z = -1

Open and closed left half-plane

We will refer to the left half of the s-plane, excluding the imaginary axis, as the open left half-plane (OLHP). Specifically, the OLHP includes all complex numbers for which the real component of said complex number is less than zero.
If we want to also include all complex numbers with real components equal to zero, we refer to this region as the closed left half-plane (CLHP).

Open and closed right half-plane

Similar to the OLHP and CLHP, we refer to the right half of the s-plane, excluding the imaginary axis, as the open right half-plane (ORHP). Specifically, the ORHP includes all complex numbers for which the real component is greater than zero. If we want to also include all complex numbers with real components equal to zero, we refer to this region as the closed right half-plane (CRHP).
Based on the definitions above, there is no intersection between the OLHP and ORHP, but the intersection of the CLHP and CRHP is described by the imaginary axis. We will use these definitions to make sense of stability in future sections.
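These region definitions are easy to mechanise; the hypothetical Python snippet below (illustrative only, not part of the course code) classifies a complex number into the four regions defined above:

```python
def half_planes(p):
    """Return the set of s-plane regions that contain the complex number p."""
    if p.real < 0:
        return {"OLHP", "CLHP"}   # strictly left of the imaginary axis
    if p.real > 0:
        return {"ORHP", "CRHP"}   # strictly right of the imaginary axis
    return {"CLHP", "CRHP"}       # on the imaginary axis: in both closed half-planes

print(sorted(half_planes(-2 + 3j)))  # ['CLHP', 'OLHP']
print(sorted(half_planes(1j)))       # ['CLHP', 'CRHP']
```

Note that only points on the imaginary axis land in both closed half-planes, matching the intersection described above.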

5.3 Partial fraction expansion

We saw in Chapter 2 that we can make use of the inverse Laplace transform to determine y(t) given a Y(s) that is listed in the Laplace transform table, such as in Table 5.1.
Table 5.1: Laplace transform table
However, in many cases the signal that we want to apply the inverse Laplace to may not be represented in our table. To circumvent this issue we can make use of partial fraction expansion to reduce an unknown signal (or system) into a sum of signals that are individually recognisable in the transform table. Based on the linearity property of the (inverse) Laplace transform, we then simply take the inverse Laplace transform of each individual signal (or subsystem) and then sum the result.
We will also soon see that our pole and zero locations play a key role in how our system will respond to input perturbations.

5.3.1 Transfer functions with distinct poles

Recall that we can describe a transfer function as a ratio of polynomials
P(s) = (b_m s^m + ... + b_1 s + b_0)/(s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0),
and assuming that we are only dealing with distinct poles (p_i ≠ p_j for i ≠ j), we can use the partial fraction expansion to represent P(s) as a sum of transfer functions, namely
P(s) = k + c_1/(s - p_1) + c_2/(s - p_2) + ... + c_n/(s - p_n),
where
c_i = lim_{s → p_i} (s - p_i)P(s),
and
k = lim_{s → ∞} P(s).
Note that c_i is complex in general, whereas k is real. Note that if P(s) is strictly proper (n > m), then k will equal zero. It is also worth highlighting that P(s) must be represented with a monic denominator polynomial in order to result in the correct c_i values.
We will only consider the system response of strictly proper systems (where n > m, implying that there are more poles than zeros) in this course (although the extension to proper systems is trivial). It follows that k = 0, which simplifies the partial fraction expansion of P(s) to
P(s) = c_1/(s - p_1) + c_2/(s - p_2) + ... + c_n/(s - p_n),
where
c_i = lim_{s → p_i} (s - p_i)P(s).

Example

Determine the partial fraction expansion of P(s) = 5/(s + s^2/10).
Based on the formulation above we can express P(s) as
P(s) = 50/( s(s + 10) ) = c_1/s + c_2/(s + 10).
We then determine c_1 and c_2 using
c_1 = lim_{s → 0} sP(s) = 50/10 = 5,
c_2 = lim_{s → -10} (s + 10)P(s) = 50/(-10) = -5,
which results in
P(s) = 5/s - 5/(s + 10).
We can use the residue function in MATLAB to find c_1, c_2, p_1, p_2.
P = 5/( s+s^2/10 );
[num,den] = tfdata(P,'v');
 
[r,p,~] = residue(num,den) %find residuals (r1,r2) and poles (p1,p2) of P(s)
r = 2×1
-5 5
p = 2×1
-10 0
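For readers working outside MATLAB, the same expansion can be checked in Python, assuming scipy is available (`scipy.signal.residue` plays the role of MATLAB's `residue`):

```python
from scipy.signal import residue

# P(s) = 5/(s + s^2/10) = 50/(s^2 + 10 s), with a monic denominator
num = [50]
den = [1, 10, 0]
r, p, k = residue(num, den)

# pair each pole with its residue: we expect P(s) = 5/s - 5/(s + 10)
pf = {round(pi.real, 6): round(ri.real, 6) for pi, ri in zip(p, r)}
print(pf[0.0], pf[-10.0])  # 5.0 -5.0
```

The direct term `k` comes back empty because the system is strictly proper.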

5.3.2 Transfer functions with repeated poles

Consider the system
P(s) = N(s)/( (s - p_1)^x (s - p_{x+1}) ... (s - p_n) ),
where x poles are repeated, namely
p_1 = p_2 = ... = p_x.
We can then use partial fraction expansion to express strictly proper P(s) as
P(s) = c_1/(s - p_1) + c_2/(s - p_1)^2 + ... + c_x/(s - p_1)^x + c_{x+1}/(s - p_{x+1}) + ... + c_n/(s - p_n),
where
c_x = lim_{s → p_1} (s - p_1)^x P(s),
and the remaining repeated-pole coefficients follow from successive differentiation,
c_{x-q} = lim_{s → p_1} (1/q!) d^q/ds^q [ (s - p_1)^x P(s) ], for q = 1, ..., x - 1.
Note that c_i is complex in general. The partial fraction expansion of a transfer function with repeated poles is definitely more involved, but we will often only deal with systems that have up to two repeated poles.

Example

Determine the partial fraction expansion of P(s) = 1/( s^2 (s + 1) ).
Based on the formulation above we can express P(s) as
P(s) = c_1/s + c_2/s^2 + c_3/(s + 1).
We then determine c_1, c_2, and c_3 using
c_2 = lim_{s → 0} s^2 P(s) = 1,
c_1 = lim_{s → 0} d/ds [ s^2 P(s) ] = lim_{s → 0} -1/(s + 1)^2 = -1,
c_3 = lim_{s → -1} (s + 1)P(s) = 1,
which results in
P(s) = -1/s + 1/s^2 + 1/(s + 1).
P = 1/( s^2*(s+1) );
[num,den] = tfdata(P,'v');
 
[r,p,~] = residue(num,den) %find residuals (r3,r2,r1) and poles (p3,p2,p1) of P(s)
r = 3×1
1 -1 1
p = 3×1
-1 0 0
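The same expansion can be cross-checked numerically in Python (assuming scipy is available); `scipy.signal.residue` lists repeated poles consecutively with increasing powers, so the expansion can be re-evaluated term by term and compared against P(s) at a test point:

```python
import numpy as np
from scipy.signal import residue

num, den = [1], [1, 1, 0, 0]   # P(s) = 1/(s^2 (s + 1))
r, p, k = residue(num, den)

def eval_pfe(r, p, s):
    """Evaluate the sum of r_i / (s - p_i)^m_i, tracking multiplicities."""
    total, power = 0.0, 0
    for i in range(len(r)):
        # consecutive equal poles get powers 1, 2, 3, ...
        power = power + 1 if i > 0 and np.isclose(p[i], p[i - 1]) else 1
        total += r[i] / (s - p[i]) ** power
    return total

s0 = 2.0  # arbitrary test point away from the poles
print(np.isclose(eval_pfe(r, p, s0), 1 / (s0**2 * (s0 + 1))))  # True
```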

5.4 Impulse response

5.4.1 Definition

The impulse response of a system is the output generated from an impulse input (commonly also referred to as the Dirac delta function) of the form
u(t) = δ(t).
This is also commonly referred to as the natural response, with a graphical representation shown in Figure 5.2.
Figure 5.2: Example representation of an impulse response.
The appeal of using an impulse input signal is that the Laplace transform of δ(t) is 1, based on
U(s) = L{δ(t)} = ∫_0^∞ δ(t) e^{-st} dt = 1.
This means that the corresponding impulse response (the response of the system output to an impulse input) will be
Y(s) = P(s)U(s) = P(s).
Under these conditions the partial fraction expansion of Y(s) follows from the previous sections, depending on whether there are repeated roots in the denominator or not.
What we should notice here is that the impulse response will be made up of a sum of terms, whereby each pole of P(s) contributes a term to the response of y(t).

5.4.2 Impulse response of a transfer function with real, distinct poles

In the case that each pole of strictly proper P(s) is distinct, we have shown that the impulse response will be
Y(s) = P(s) = c_1/(s - p_1) + c_2/(s - p_2) + ... + c_n/(s - p_n),
where
c_i = lim_{s → p_i} (s - p_i)P(s),
and p_i is the i-th pole of our system. Recall from our Laplace transform table that an exponential response in the time domain is described in the Laplace domain by
L{e^{pt}} = 1/(s - p).
Based on this transform, Y(s) above can be inverse Laplace transformed to determine y(t) as
y(t) = c_1 e^{p_1 t} + c_2 e^{p_2 t} + ... + c_n e^{p_n t}.
The result above means that when an impulse input is applied to a strictly proper system with distinct poles, the resulting impulse response will be a sum of (real-valued or complex) exponentials.
If all poles of P(s) are real, namely
Im(p_i) = 0 for all i,
it follows that each c_i is also real. The time-domain impulse response of a strictly proper system is then made up of a sum of real-valued exponentials,
y(t) = c_1 e^{p_1 t} + c_2 e^{p_2 t} + ... + c_n e^{p_n t}, with p_i and c_i real.
Based on our understanding of the exponential function for real-valued p_i, we know that
e^{p_i t} → 0 as t → ∞ if p_i < 0, and e^{p_i t} → ∞ as t → ∞ if p_i > 0.
In other words, if p_i < 0, then the exponential term will tend to zero over time, whereas if p_i > 0, then the exponential term, and by extension, y(t), will grow without bound and tend to infinity over time. This means that if p_i > 0 for any one pole in P(s), then y(t) will run off to infinity. On the other hand, if p_i < 0 for all poles, then the impulse response, y(t), will tend to zero, as all the exponential terms that make up y(t) tend to zero over time. The special case of p_i = 0 simply means that there is no exponential response for pole i, and the particular response is instead a step signal with value c_i.
We can easily visualise the real-valued poles in the s-plane, as shown in Figure 5.3. The above equation suggests that pole i lying in the open left half-plane (OLHP), implying p_i < 0, will result in the corresponding term c_i e^{p_i t} tending to zero with time.
Figure 5.3: S-plane showing (left) all poles in OLHP, which implies that system response will converge to zero, and (right) one pole in ORHP, which means that system will diverge.
Therefore, if all poles of the system lie in the OLHP, then the impulse response will converge to zero, namely
lim_{t → ∞} y(t) = 0.
Figure 5.3 (left) is an example of such a case. In the converse, if any one pole lies in the open right half-plane (ORHP), as shown in the Figure 5.3 (right), the system will have a divergent impulse response (even though the other two poles are in the OLHP).

Example

Given that the input is an impulse, u(t) = δ(t), and P(s) = 1/( (s + a)(s + b) ), where 0 < a < b, determine y(t).
The output signal follows as Y(s) = P(s)U(s) = P(s), with the poles of P(s) at p_1 = -a and p_2 = -b. Using partial fraction expansion on Y(s), we get
Y(s) = c_1/(s + a) + c_2/(s + b).
Using our residual formula,
c_1 = lim_{s → -a} (s + a)Y(s) = 1/(b - a),
c_2 = lim_{s → -b} (s + b)Y(s) = -1/(b - a).
Therefore
Y(s) = (1/(b - a))·(1/(s + a)) - (1/(b - a))·(1/(s + b)).
We can confirm that the partial fraction expansion is correct by resolving the right-hand side above into a single term, or by using the residue function as before. Finally, we convert Y(s) to y(t) using the Laplace transform table and the linearity property:
y(t) = (1/(b - a))·(e^{-at} - e^{-bt}).
The time-domain response of y(t) as well as each individual exponential response is shown using the code below, where a and b can be adjusted.
%Example
s = tf('s');
 
a = 3; %p1 = -a
b = 10; %p2 = -b
P = 1/(s+a)/(s+b);
 
P1 = 1/(b-a)/(s+a);
P2 = -1/(b-a)/(s+b);
 
figure,hold on
impulse(P,'k')
impulse(P1,'b')
impulse(P2,'r')
legend('y(t)','y1(t)','y2(t)')
Playing around with different a and b values, you should notice that the term associated with the pole further from the imaginary axis decays more quickly.
Remembering that -a and -b are the locations of our poles, it appears that the pole location has a deterministic effect on how y(t) behaves. The code below plots the positions of the poles.
figure, hold on
pzmap(P)
plot(-a,0,'xb')
plot(-b,0,'xr')
text(-a,.1,'a')
text(-b,.1,'b')

5.4.3 Impulse response of a transfer function with real, repeated poles

In the case that there are x repeated poles in a strictly proper system, we have already shown that
Y(s) = P(s) = c_1/(s - p_1) + c_2/(s - p_1)^2 + ... + c_x/(s - p_1)^x + c_{x+1}/(s - p_{x+1}) + ... + c_n/(s - p_n).
If all poles are real, namely Im(p_i) = 0, the distinct-pole terms from the above equation will behave as in the previous section, resulting in a sum of exponential responses, c_i e^{p_i t}, whereby the convergent/divergent behaviour is determined by whether p_i is less/greater than zero.
Given that all poles of P(s) are known to be real, the group of repeated-pole terms,
c_1/(s - p_1) + c_2/(s - p_1)^2 + ... + c_x/(s - p_1)^x,
is equivalent to a sum of (q - 1)-order functions, t^{q-1}, with a frequency shift of p_1, based on the frequency shift property
L{ e^{p_1 t} f(t) } = F(s - p_1).
For example, if x = 2, then
Y(s) = c_1/(s - p_1) + c_2/(s - p_1)^2 + (distinct-pole terms),
and the inverse Laplace transform follows as
y(t) = c_1 e^{p_1 t} + c_2 t e^{p_1 t} + (sum of exponentials).
As before, if p_1 < 0, then the above equation will tend to zero, since the exponential decay dominates the polynomial growth based on L'Hopital's rule:
lim_{t → ∞} t e^{p_1 t} = lim_{t → ∞} t/e^{-p_1 t} = lim_{t → ∞} 1/(-p_1 e^{-p_1 t}) = 0 for p_1 < 0,
and if p_1 > 0 then the equation will tend to infinity. Interestingly, if p_1 = 0, then the repeated-pole term reduces to c_2 t, which is a ramp-like signal that runs off to infinity with t.
In general, the inverse Laplace transform of each repeated-pole term follows as
L^{-1}{ c_q/(s - p_1)^q } = c_q t^{q-1} e^{p_1 t}/(q - 1)!,
with the same convergence requirement of p_1 < 0.
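The repeated-pole transform used above can be written out explicitly; a short derivation, starting from the standard transform of a monomial and applying the frequency-shift property:

```latex
\mathcal{L}\{t^{\,q-1}\} = \frac{(q-1)!}{s^{q}}
\;\xrightarrow{\ \text{shift by } p_1\ }\;
\mathcal{L}\{t^{\,q-1}e^{p_1 t}\} = \frac{(q-1)!}{(s-p_1)^{q}},
\qquad\text{so}\qquad
\mathcal{L}^{-1}\!\left\{\frac{c_q}{(s-p_1)^{q}}\right\}
= \frac{c_q\,t^{\,q-1}e^{p_1 t}}{(q-1)!}.
```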
s = tf('s');
 
a = 2; %repeated pole of p1=p2 = -a
b = 5; %p3 = -b
P = 1/(s+a)^2/(s+b);
 
[num,den] = tfdata(P,'v');
[r,p,~] = residue(num,den)
r = 3×1
0.1111 -0.1111 0.3333
p = 3×1
-5.0000 -2.0000 -2.0000
 
figure,hold on
impulse(P,'k')

5.4.4 Impulse response of a transfer function with complex, distinct poles

Section 5.4.2 considered the case when the poles of P(s) are real and distinct. However, systems can also contain complex poles. Consider the case where the impulse response is given by
Y(s) = P(s) = c_1/(s - p_1) + c_2/(s - p_2) + c_3/(s - p_3) + ... + c_n/(s - p_n),
where p_3, ..., p_n are real. In other words, two of the n poles are complex:
p_1 = σ + jω, p_2 = σ - jω.
It follows that c_2 is the complex conjugate of c_1, based on
c_2 = lim_{s → p_2} (s - p_2)P(s) = conj( lim_{s → p_1} (s - p_1)P(s) ) = conj(c_1).
The real-pole terms of Y(s) will behave as in the previous section, resulting in a sum of real-valued exponential responses, c_i e^{p_i t}, whereby the convergent/divergent behaviour is determined by whether p_i is less/greater than zero.
Complex poles always come in conjugate pairs, namely p_2 = conj(p_1), where p_1 = σ + jω and p_2 = σ - jω. Taking the inverse Laplace transform of the first two terms of Y(s), this has the structure of
y_{12}(t) = c_1 e^{p_1 t} + c_2 e^{p_2 t},
where it can be shown that c_2 = conj(c_1). Defining c_1 = v + jw, we can write the above equation as
y_{12}(t) = (v + jw) e^{(σ + jω)t} + (v - jw) e^{(σ - jω)t}.
Using Euler's formula of e^{jωt} = cos(ωt) + j·sin(ωt), we can reduce the above equation to
y_{12}(t) = 2 e^{σt} ( v·cos(ωt) - w·sin(ωt) ).
This result can be further reduced, using the trigonometric identity of A·cos(x) + B·sin(x) = sqrt(A^2 + B^2)·cos(x - φ), to
y_{12}(t) = 2·sqrt(v^2 + w^2)·e^{σt}·cos(ωt - φ),
where φ = atan2(-w, v). Notably, the imaginary component of the complex pole, ω, determines the frequency of the co-sinusoid, whereas the real component of the complex pole, σ, determines the exponential behaviour of the signal.
The code below shows that the result above is equivalent to using MATLAB's impulse function when the system has two complex conjugate poles, namely p = -3 ± 5j.
%Example
s = tf('s');
 
a = 3; %real component of complex conjugate pole pair
b = 5; %imaginary component of complex conjugate pole pair
p1 = a+1i*b;
p2 = p1'; %determine complex conjugate of p1
 
P = 1/(s+p1)/(s+p2)
P =
        1
  --------------
  s^2 + 6 s + 34
Continuous-time transfer function.
[y,t] = impulse(P);
 
[num,den] = tfdata(P,'v');
[r,p,~] = residue(num,den);
v = real( r(1) );
w = imag( r(1) );
 
phi = -atan( w/v );
a = real( p(1) );
b = imag( p(1) );
y_ = 2*sqrt(v^2+w^2)*exp(a*t).*cos(b*t-phi);
 
figure, hold on
plot(t,y,lineWidth=2)
plot(t,y_,'r--',lineWidth=2)
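An equivalent cross-check in Python, assuming scipy is available: the impulse response of P(s) = 1/(s^2 + 6s + 34), with poles at -3 ± 5j, is compared against the closed-form cosinusoid derived above.

```python
import numpy as np
from scipy.signal import TransferFunction, impulse, residue

P = TransferFunction([1], [1, 6, 34])       # poles at -3 +/- 5j
t, y = impulse(P, T=np.linspace(0, 2, 500))

r, p, _ = residue([1], [1, 6, 34])
v, w = r[0].real, r[0].imag                 # c1 = v + jw
sigma, omega = p[0].real, p[0].imag         # p1 = sigma + j*omega
phi = np.arctan2(-w, v)

y_formula = 2 * np.sqrt(v**2 + w**2) * np.exp(sigma * t) * np.cos(omega * t - phi)
print(np.allclose(y, y_formula, atol=1e-6))  # True
```

The formula is insensitive to which of the two conjugate poles `residue` lists first, since flipping the signs of both w and ω leaves the cosine unchanged.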
The cosinusoidal response from the complex conjugate pole pair is bounded in amplitude by the exponential envelope 2·sqrt(v^2 + w^2)·e^{σt} and will either
  1. exhibit a damped cosinusoidal response (the cosine will shrink over time),
  2. maintain an undamped response (the cosine amplitude will remain unchanged over time), or
  3. exhibit an unstable response (where the cosine amplitude increases over time).
We can use our understanding of the exponential limit to determine how the amplitude of the cosine will change with time, based on
e^{σt} → 0 as t → ∞ if σ < 0, e^{σt} → ∞ if σ > 0, and e^{σt} = 1 for all t if σ = 0.
So the amplitude will tend to zero over time if σ < 0 (case 1 above). However, if σ > 0, then the amplitude of the cosine function will grow with time (case 3). Finally, if σ = 0, the cosine will continue for all time, with a constant amplitude of 2·sqrt(v^2 + w^2) (case 2).
We can therefore deduce that the contribution of a complex conjugate pole pair to the impulse response will converge to zero if σ < 0, which is the same convergence condition as for our system with strictly real poles. If all the system poles are plotted in the s-plane (including real-valued poles), we can easily determine whether the impulse response will converge to zero or diverge based on their locations in the s-plane. Specifically, if all poles are located in the open left half-plane (as shown in Figure 5.4 (left)), the impulse response will converge to zero, whereas if any one pole resides in the open right half-plane (as shown in Figure 5.4 (right)), the response will diverge to infinity.
Figure 5.4: S-plane showing (left) all poles in OLHP, which implies that system response will converge to zero, and (right) one complex-conjugate pole pair in ORHP, which means that system will diverge.

Arbitrary number of complex conjugate pair of poles

The derivation above can be extended to an arbitrary number of complex conjugate pole pairs by considering the contribution of each complex conjugate pair separately, namely
p_i = σ_i + jω_i, conj(p_i) = σ_i - jω_i,
and this results in a sum of co-sinusoids of the form
y(t) = Σ_i 2·sqrt(v_i^2 + w_i^2)·e^{σ_i t}·cos(ω_i t - φ_i).
The signal above is guaranteed to converge to zero if σ_i < 0 for every pole pair, and will result in an unstable output if σ_i > 0 for any one pole.

Issue with impulse inputs

Impulse inputs of u(t) = δ(t) are mathematically appealing input signals, based on the fact that Y(s) = P(s)U(s) = P(s). We effectively get to set the output signal equal to that of the system under consideration. However, the problem with impulse inputs is that we cannot physically manifest a signal that is infinitely large for an infinitesimally small period of time (based on the mathematical definition). Approximations of an impulse signal do exist, such as a pulse function, which has a finite magnitude and duration, but its Laplace transform will no longer be equal to one.
An alternative approach is to use so-called step input signals, which is detailed in the following section.

5.5 Step response

5.5.1 Definition

The unit step input is a very useful signal that can be used to assess the stability and performance of a system. Importantly, it gets around the issue of the impulse response not being realisable in general. It is formally defined as the Heaviside step function, H(t), in many mathematical textbooks. In essence, it is a signal that has a value of zero for all negative time (t < 0) and 1 everywhere else (t ≥ 0), namely
u(t) = 0 for t < 0, and u(t) = 1 for t ≥ 0.
The output generated from a unit step input is known as the unit step response, with an example of the time-domain response shown in Figure 5.5. Think of the step response as the system's response to a sustained, instantaneous constant perturbation.
Figure 5.5: Example of a system's response to a step.
We have already defined the Laplace transform of the unit step function to be
U(s) = 1/s.
Note that the unit step function is defined when the step input has a magnitude of one. Using our previous formulation of Y(s) = P(s)U(s), the unit step response can be described as
Y(s) = P(s)/s.

5.5.2 Step response of a transfer function with real, distinct poles

Recall that a strictly proper plant with real, distinct poles is described as
P(s) = c_1/(s - p_1) + c_2/(s - p_2) + ... + c_n/(s - p_n),
where
c_i = lim_{s → p_i} (s - p_i)P(s).
Assuming that P(s) does not have a pole at s = 0, the corresponding step response follows as
Y(s) = P(s)/s = d_1/(s - p_1) + d_2/(s - p_2) + ... + d_n/(s - p_n) + d_0/s.
The first n terms are a result of the natural response of the system (taking the same form as the impulse response but with different residue values), and the final term is a result of the forced response of the system (from the step input).
The corresponding time-domain response is found by applying the inverse Laplace transform, which yields
y(t) = d_1 e^{p_1 t} + d_2 e^{p_2 t} + ... + d_n e^{p_n t} + d_0,
where d_i = lim_{s → p_i} (s - p_i)Y(s) and d_0 = lim_{s → 0} sY(s) = P(0). Reflecting on the equation above, if all poles lie in the OLHP (p_i < 0), then y(t) will converge to d_0 = P(0) as time tends to infinity, namely
lim_{t → ∞} y(t) = P(0).
How quickly the step response tends to P(0) is also dependent on the magnitude of the p_i values. Conversely, if any one pole lies in the ORHP (p_i > 0) then y(t) will tend to infinity. Notice that this is exactly the same condition that we found when dealing with the impulse response.
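A numerical check of this result in Python, assuming scipy is available: for a stable plant, the step response settles at d_0 = P(0), the DC gain. The plant below is an illustrative choice, not one from the notes.

```python
import numpy as np
from scipy.signal import TransferFunction, step

P = TransferFunction([1], [1, 5, 6])       # poles at -2 and -3, both in the OLHP
dc_gain = 1 / 6                            # P(0) = num(0)/den(0)

t, y = step(P, T=np.linspace(0, 10, 400))  # ~20 time constants of the slowest pole
print(abs(y[-1] - dc_gain) < 1e-4)         # True: y(t) -> P(0)
```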

Example

Given that the input signal is a unit step, u(t) = 1(t), and P(s) = k/(s + a), where a > 0, determine y(t).
The output signal follows as
Y(s) = P(s)/s = k/( s(s + a) ) = d_1/(s + a) + d_0/s.
Using our residual formula,
d_1 = lim_{s → -a} (s + a)Y(s) = -k/a,
d_0 = lim_{s → 0} sY(s) = k/a,
and Y(s) follows as
Y(s) = (k/a)·(1/s - 1/(s + a)).
Converting Y(s) to y(t) using the Laplace transform table, we get
y(t) = (k/a)·(1 - e^{-at}).
The time-domain response of y(t) is shown below, where k and a can be adjusted.
%Example
s = tf('s');
k = 1;
a = 2; % pole at p=-a
P = k/(s+a);
figure,clf
step(P)

5.5.3 Step response of a transfer function with real, repeated poles

Recall that if there are x repeated poles in a strictly proper system, we can expand P(s) as
P(s) = c_1/(s - p_1) + c_2/(s - p_1)^2 + ... + c_x/(s - p_1)^x + c_{x+1}/(s - p_{x+1}) + ... + c_n/(s - p_n).
In the case of the step response we have Y(s) = P(s)/s, which implies that
Y(s) = d_1/(s - p_1) + ... + d_x/(s - p_1)^x + d_{x+1}/(s - p_{x+1}) + ... + d_n/(s - p_n) + d_0/s.
Importantly, this assumes that the pole at s = 0 introduced by the step input is distinct, i.e. p_1 ≠ 0 (we will cover the repeated-pole-at-the-origin scenario in more detail in a later section). As in the case of the impulse response, the repeated-pole terms above, when inverse Laplace transformed, will reduce to a sum of t^{q-1}-order functions with exponential coefficients, where each term will decay to zero if p_1 < 0. After applying the inverse Laplace transform, the distinct-pole terms above will reduce to a sum of exponential functions, where the p_i values will dictate whether the individual responses converge to zero or run off to infinity. The final term in Y(s) simply corresponds to the constant d_0 = P(0) in the time domain.
The result above means that we can again make use of our "impulse response" understanding when determining whether y(t) will be convergent or divergent, with the main difference being that y(t) converges to P(0) rather than zero when all poles lie in the OLHP.

5.5.4 Step response of a transfer function with complex, distinct poles

In the case that P(s) has two complex conjugate poles, p_1 = σ + jω and p_2 = σ - jω, we can decompose our plant as before using
P(s) = c_1/(s - p_1) + c_2/(s - p_2) + c_3/(s - p_3) + ... + c_n/(s - p_n),
where c_2 = conj(c_1). When u(t) is a unit step input, the output response, Y(s) = P(s)/s, follows as
Y(s) = d_1/(s - p_1) + d_2/(s - p_2) + d_3/(s - p_3) + ... + d_n/(s - p_n) + d_0/s.
We can therefore make use of our "impulse response" understanding to predict the behaviour of the first two terms in the equation above. When applying the inverse Laplace transform, the first two terms will result in an exponential cosinusoid, the real-pole terms will be a sum of exponential functions, and the final term will be a constant, namely
y(t) = 2·sqrt(v^2 + w^2)·e^{σt}·cos(ωt - φ) + (sum of real exponentials) + d_0,
where d_1 = v + jw, φ = atan2(-w, v), and d_0 = lim_{s → 0} sY(s) = P(0).
The code below demonstrates the step response for a system of the form P(s) = 1/( (s - p_1)(s - p_2) ), with p_{1,2} = -6 ± 5j.
s = tf('s');
 
a = -6; %real component of complex conjugate pole pair
b = 5; %imaginary component of complex conjugate pole pair
 
p1 = a+1i*b;
p2 = p1'; %determine complex conjugate of p1
 
P = 1/(s-p1)/(s-p2)
P =
         1
  ---------------
  s^2 + 12 s + 61
Continuous-time transfer function.
[y,t] = step(P);
 
[num,den] = tfdata(P/s,'v');
[r,~,~] = residue(num,den)
r = 3×1 complex
-0.0082 + 0.0098i -0.0082 - 0.0098i 0.0164 + 0.0000i
 
figure
plot(t,y,lineWidth=2)
axis tight

Example

Given that the input is a unit step, u(t) = 1(t), and P(s) = 1/(s^2 + 2s + 2), determine y(t).
The output signal follows as Y(s) = P(s)/s. Using partial fraction expansion on Y(s), we get
Y(s) = d_1/(s + 1 - j) + d_2/(s + 1 + j) + d_0/s.
As expected, the two plant poles are complex conjugates: p_{1,2} = -1 ± j.
Using our residual formulas from before, we get
d_1 = -0.25 + 0.25j, d_2 = -0.25 - 0.25j, d_0 = lim_{s → 0} sY(s) = P(0) = 0.5.
The output response follows as
Y(s) = (-0.25 + 0.25j)/(s + 1 - j) + (-0.25 - 0.25j)/(s + 1 + j) + 0.5/s.
The time-domain output response is determined by taking the inverse Laplace transform, which will take the form of
y(t) = 2·sqrt(v^2 + w^2)·e^{σt}·cos(ωt - φ) + d_0,
where d_1 = v + jw, σ = Re(p_1), and ω = Im(p_1). Using this formulation, we determine y(t) as
y(t) = 2·sqrt(0.125)·e^{-t}·cos(t - φ) + 0.5,
where v = -0.25, w = 0.25, σ = -1, ω = 1, and φ = atan2(-w, v) = -2.356 rad. The time-domain step response of y(t) is generated using the code below.
%Example
s = tf('s');
P = 1/(s^2+2*s+2);
[y,t] = step(P); %step response of P(s) using MATLAB function
 
a = -1;
b = 1;
v = -0.25;
w = 0.25;
phi = -atan2(w,v);
y_ = 2*sqrt(v^2+w^2)*exp(a*t).*cos(b*t-phi)+0.5; %step response of P(s) based on mathematical formulation.
 
figure,hold on
plot(t,y,lineWidth=2)
plot(t,y_,'--r',lineWidth=2)
axis tight
 

5.6 Response to an arbitrary input

The impulse and step input are two examples of the input signals that are used, but in the general case, the input can take any physically realisable form (more on this later). An example of this is shown in Figure 5.6.
Figure 5.6: Example of a system's response to an arbitrary input.
An arbitrary input signal, u(t), can also be described in the Laplace domain as a ratio of factorised polynomials, in a similar manner to that of P(s):
U(s) = k_u (s - z_1^u)...(s - z_v^u)/( (s - q_1)(s - q_2)...(s - q_w) ).
If u(t) is applied to system P(s), and assuming for sake of simplicity that all poles in Y(s) = P(s)U(s) are distinct, then the output signal follows as
Y(s) = c_1/(s - p_1) + ... + c_n/(s - p_n) + g_1/(s - q_1) + ... + g_w/(s - q_w),
where
c_i = lim_{s → p_i} (s - p_i)Y(s) and g_i = lim_{s → q_i} (s - q_i)Y(s).
The first sum describes the contribution of the plant poles to the output response, and the second sum describes the contribution from the input signal. The generalised result above for distinct poles means that the partial fraction expansion of Y(s) will involve first-order transfer functions that are parameterised by the poles from both P(s) and U(s),
with a corresponding time-domain response of
y(t) = c_1 e^{p_1 t} + ... + c_n e^{p_n t} + g_1 e^{q_1 t} + ... + g_w e^{q_w t},
where the residues are real or complex in general. The output response to an arbitrary input, and by extension, the convergent or divergent behaviour, will therefore depend on both the pole locations of P(s) and U(s). Note that if P(s) contains at least one pole in the ORHP (Re(p_i) > 0), then y(t) will tend to infinity with time, regardless of the poles in U(s).

5.7 Stability

System stability is an important property of any control system, and as we will see later, we cannot hope to achieve transient and steady-state performance specifications if our system is unstable. You can think of instability as when a system's output grows without bound and tends to infinity as time increases. However, that description is a bit loose, as there may be certain input signals that result in the output response of a particular system tending to a finite value, whereas other input signals can result in the output tending to infinity with time. We will therefore classify the stability of a system in one of three ways, namely: unstable, marginally stable, or strictly stable.
Importantly, when evaluating the stability of the system, the assumption is that the input signal is bounded (it is not running off to infinity). Otherwise the question of whether the system itself is unstable becomes moot.

5.7.1 Unstable systems

The preceding sections have shown that regardless of the structure of the plant (e.g. whether the poles are distinct, real, and/or complex), the output response of our plant will tend to infinity when the real component of any one pole is greater than zero (Re(p_i) > 0), with an example depicted in Figure 5.7 in the s-plane. Under this condition, we refer to our system as unstable. Note that this is applicable regardless of whether the poles are real or complex.
Figure 5.7: S-plane representation of an unstable system, as at least one pole is in the ORHP.
We can therefore easily check to see if a system is unstable by finding all the poles and then deducing whether any poles lie in the ORHP.
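This check is easy to script; a minimal Python version (illustrative, using numpy's root finder in place of MATLAB's pole function):

```python
import numpy as np

def is_unstable(den):
    """True if any root of the denominator polynomial lies in the ORHP."""
    return bool(np.any(np.roots(den).real > 0))

print(is_unstable([1, 1, -1, 1]))  # True: s^3 + s^2 - s + 1 has a complex pair in the ORHP
print(is_unstable([1, 5, 6]))      # False: poles at -2 and -3 are in the OLHP
```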

Example

Determine whether P(s) = (s + 5)/(s^3 + s^2 - s + 1) is unstable.
Using MATLAB's pole function, we can check the locations of the poles of as shown below.
s = tf('s');
P = (s+5)/(s^3+s^2-s+1);
p = pole(P)
p = 3×1 complex
0.4196 + 0.6063i
0.4196 - 0.6063i
-1.8393 + 0.0000i
We can also use pzmap to plot the poles and zeros in the s-plane and then visually check to see if any poles are in the ORHP.
figure
pzmap(P)
Based on the above results, it is clear that two poles are in the ORHP (Re(p) > 0). Therefore P(s) is classed as unstable.

5.7.2 Marginally stable systems

Marginally stable systems are those that exhibit a stable response for certain input signals, but become unstable for other particular, bounded signals. We have actually already seen an example of this in Section 5.4.3. Formally, a system will be marginally stable if a distinct pole lies on the imaginary axis. Under this condition, if an input signal contains at least one pole that matches the pole of P(s) lying on the imaginary axis, then the output response is guaranteed to be unstable.
If the pole is at the origin, i.e. p_1 = 0, then P(s) is represented by
P(s) = c_1/s + c_2/(s - p_2) + ... + c_n/(s - p_n).
If a step input, U(s) = 1/s, is now applied to the system, then the output response in the Laplace domain is
Y(s) = P(s)/s = c_1/s^2 + ...,
which results in a repeated pole at s = 0. Using our working from earlier, and assuming for sake of simplicity (without loss of generality) that the other poles are distinct, the time-domain output response can then be determined as
y(t) = c_1 t + d_1 + d_2 e^{p_2 t} + ... + d_n e^{p_n t},
where the d_i terms are the corresponding residues. The first term, resulting from the two poles at s = 0, will then run off to +∞ or -∞, depending on the sign of c_1. A simple example where P(s) = 1/s is shown using the code below, where an input signal of u(t) = 1(t) results in an unstable response.
s = tf('s');
P = 1/s;
figure
step(P)
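The same marginally stable case can be reproduced in Python (assuming scipy is available): the step response of P(s) = 1/s is Y(s) = 1/s^2, i.e. the ramp y(t) = t, which grows without bound.

```python
import numpy as np
from scipy.signal import TransferFunction, step

P = TransferFunction([1], [1, 0])        # a single (distinct) pole at the origin
t, y = step(P, T=np.linspace(0, 5, 100))

print(np.allclose(y, t, atol=1e-6))      # True: the step response is the ramp y(t) = t
```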
If the distinct poles lying on the imaginary axis are complex, i.e. p_{1,2} = ±jb, then we can describe P(s) as
P(s) = c_1/(s - jb) + c_2/(s + jb) + c_3/(s - p_3) + ... + c_n/(s - p_n).
If we apply an input signal that matches the frequency of the poles lying on the imaginary axis, namely u(t) = sin(bt), then the corresponding Laplace-domain input signal is
U(s) = b/(s^2 + b^2) = b/( (s - jb)(s + jb) ).
The output response, Y(s) = P(s)U(s), follows as
Y(s) = b·P(s)/( (s - jb)(s + jb) ),
which contains repeated poles at s = ±jb. It can be similarly shown that the time-domain output response will diverge as a result of the repeated poles that lie on the imaginary axis. A simple example is shown using the code below, where P(s) = 1/(s^2 + b^2) and u(t) = sin(bt) results in an unstable response.
s = tf('s');
 
b = 4; %frequency of sinusoid and location of poles on imaginary axis
P = 1/(s^2+b^2);
 
t = 0:1e-3:15;
u = sin(b*t);
 
figure
lsim(P,u,t)

5.7.3 Strictly stable systems

The previous sections have shown that if any one pole in P(s) lies in the ORHP or on the imaginary axis, then the output response is not guaranteed to be stable. More formally, if any one pole of P(s) lies in the CRHP (Re(p_i) ≥ 0), then we cannot be sure whether the output response will be stable for all bounded inputs. Conversely, we have shown that if all poles are located in the OLHP (Re(p_i) < 0), then the output response is guaranteed to be bounded if the input signal is bounded. Under this condition we can say that our system is strictly stable.
Equivalently, a system is classed as bounded-input, bounded-output (BIBO) stable if it yields a bounded output for any bounded input. This means that we may assess system stability using any arbitrary input signal, u(t), with stable poles (meaning that the input is bounded). If the corresponding set of output responses are all bounded, then our system is BIBO stable. Therefore, the requirement for a system to be BIBO stable is Re(p_i) < 0 for all poles, which is equivalent to being strictly stable.
We will refer to strictly stable (or equivalently, BIBO stable) systems from this point onwards as stable systems, with the implication that they meet the aforementioned conditions. The difference between a stable and unstable system is summarised in Figure 5.8.
Figure 5.8: Summary of effect of pole values in s-plane on system stability.

Example

Determine whether P(s) = (s + 5)^2/(s^3 + 2s^2 + 5s + 1) is unstable.
Using MATLAB's pole function, we can check the locations of the poles of as shown below.
s = tf('s');
P = (s+5)^2/(s^3+2*s^2+5*s+1);
p = pole(P)
p = 3×1 complex
-0.8916 + 1.9541i -0.8916 - 1.9541i -0.2168 + 0.0000i
We can also use pzmap to plot the poles and zeros in the s-plane and then visually check to see if any poles are in the ORHP.
figure
pzmap(P)
Based on the above results, it is clear that all poles are in the OLHP (Re(p_i) < 0). Therefore P(s) is classed as stable.

5.8 Initial and final value theorem

Initial value theorem and final value theorem are useful mathematical tools that can be used to ascertain initial and final (steady-state) behaviour of a time-domain signal, after an input signal has been applied to a system.

5.8.1 Initial value theorem

The initial value theorem (IVT) determines the initial value of a signal as the time parameter approaches zero: t → 0. In the time domain, this can be represented as
y(0) = lim_{t → 0} y(t).
We can also determine an equivalent form of the equation above that considers Y(s) instead of y(t). This is useful if we are working in the Laplace domain (as we will tend to do in this course).
Consider the Laplace transform of a derivative
L{dy/dt} = ∫_0^∞ (dy/dt) e^{-st} dt = sY(s) - y(0).
The result above can be reorganised as
∫_0^∞ (dy/dt) e^{-st} dt + y(0) = sY(s).
Taking the limit as s → ∞ on both sides of the equation, the integral term vanishes (since e^{-st} → 0 for t > 0).
We are therefore able to determine the initial value (value at t = 0) of y(t) using
y(0) = lim_{s → ∞} sY(s).

Example

Use the initial value theorem to determine the initial value of y(t), where Y(s) = -10/(s + 5).
Using IVT, we obtain
y(0) = lim_{s → ∞} sY(s) = lim_{s → ∞} -10s/(s + 5) = -10.
Note that we made use of L'Hopital's rule to deduce the limit.
s = tf('s');
 
a = -10;
b = 5;
Y = a/(s+b);
y0 = a
y0 = -10
 
figure
impulse(Y)

5.8.2 Final value theorem

The final value theorem (FVT) determines the steady-state value of a signal as the time parameter tends to infinity: t → ∞. In the time domain, this can be represented as
y(∞) = lim_{t → ∞} y(t).
We can also determine an equivalent form of the equation above that considers Y(s) instead of y(t). The final value theorem is derived from the Laplace transform of a derivative
L{dy/dt} = ∫_0^∞ (dy/dt) e^{-st} dt = sY(s) - y(0).
The result above can be reorganised as
∫_0^∞ (dy/dt) e^{-st} dt + y(0) = sY(s).
Taking the limit as s → 0 on both sides of the equation, the integral reduces to y(∞) - y(0) (since e^{-st} → 1), which cancels the y(0) terms.
We are therefore able to determine the final (steady-state) value of y(t) using
y(∞) = lim_{s → 0} sY(s).
Note that final value theorem is only applicable to signals that are bounded (they converge to a finite value as ).

Example

Use the final value theorem to determine the steady-state value of y(t), where Y(s) = -10/(s + 5).
Using FVT, we obtain
y(∞) = lim_{s → 0} sY(s) = lim_{s → 0} -10s/(s + 5) = 0.
s = tf('s');
 
a = -10;
b = 5;
Y = a/(s+b);
y_inf = 0
y_inf = 0
 
figure
impulse(Y)
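Both theorems can be sanity-checked numerically by evaluating sY(s) at a very large and a very small value of s; an illustrative Python sketch for Y(s) = -10/(s + 5) (the FVT limit is valid here because the only pole, at -5, is stable):

```python
def sY(s):
    """s * Y(s) for Y(s) = -10/(s + 5)."""
    return s * -10.0 / (s + 5.0)

y0 = sY(1e9)       # IVT: s -> infinity approximates the initial value y(0)
y_inf = sY(1e-9)   # FVT: s -> 0 approximates the steady-state value y(infinity)

print(round(y0))    # -10
print(round(y_inf)) # 0
```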